The Negative Infinity Before the Singularity

By Suvro Ghosh

Artificial Intelligence [AI, software systems that learn patterns from data and generate predictions, text, images, voice, video, or decisions] is not morally improved by being technically impressive; a knife remains a knife whether it was forged by a village blacksmith or optimized by a billion-parameter model humming in a data center.

That is the first hard fact India must swallow without syrup. AI will not be distributed according to virtue. It will be distributed according to bandwidth, incentive, cunning, capital, desperation, weak enforcement, social vulnerability, and the old Indian genius for bending any new instrument toward private advantage before public ethics has finished looking for its spectacles. The same technology that can summarize a discharge note, translate a rural patient’s symptoms, detect a diabetic retina lesion, or support a remote physician can also clone a daughter’s voice, fabricate a compromising video, impersonate a bank officer, manufacture a political prophet, or flood a WhatsApp group with synthetic certainty wrapped in nationalist theater and family melodrama. That is not a science-fiction problem. It is a systems problem with human teeth.

The cheerful version says AI will democratize expertise. The less marketable version says it will first democratize forgery. This matters especially in India because the country has an unusually combustible mixture: massive digital adoption, uneven education, high social shame, fragile institutional trust, overworked policing, spectacular smartphone penetration, cheap data, weak privacy habits, and an economy where many people live close enough to financial panic that a scammer does not need to be brilliant; he only needs to be early, loud, and convincing. In such a place, the first mass beneficiaries of AI may not be village patients, schoolchildren, or public hospitals. They may be extortionists, fraud rings, political manipulators, exam cheats, fake recruiters, bot-farm operators, and that great recurring character in Indian public life: the small-time villain suddenly given industrial tooling.

The tragedy is not that the good use cases are imaginary. They are very real. Healthcare needs AI desperately, if “desperately” is allowed to mean “with architecture, governance, auditability, humility, and adult supervision.” India has remote populations underserved by clinicians, public hospitals drowning in queues, families ruined by diagnostic delay, mental health needs hidden under shame, and small clinics operating as lonely outposts of improvisation. A well-designed AI layer could help triage symptoms, translate medical language, flag drug interactions, detect missed follow-up, support community health workers, read basic imaging, improve care navigation, and reduce the grotesque information asymmetry between a frightened patient and a system that speaks in acronyms, forms, fees, and corridor instructions. In healthcare, the moral case for AI is not decorative. It is as practical as a clean syringe.

But a good use case does not automatically win. In weakly governed technical ecosystems, virtue is often slower than vice. A hospital AI system needs validation, consent, integration, monitoring, liability rules, clinical governance, cybersecurity, workflow design, and someone responsible at 2:13 a.m. when the model says something stupid with the confidence of a district magistrate. A scammer needs a phone, a voice sample, an account mule, a payment rail, and a victim’s fear. The asymmetry is brutal. Care requires institutions. Harm requires only opportunity.

This is why the fantasy of an instant Indian AI renaissance is so thin. Before any “singularity” rises upward into abundance, India may pass through a negative singularity of fraud density, trust collapse, and labor displacement. The phrase sounds theatrical, but the underlying pattern is sober. First, automation eats portions of clerical, support, content, coding, design, translation, tutoring, and service work. Second, the same displaced or underemployed talent pool discovers that deception has become cheaper, faster, and more scalable. Third, citizens who were already trained to trust authority symbols more than evidence are attacked with synthetic authority: fake police, fake courts, fake doctors, fake bank officials, fake relatives, fake news anchors, fake gurus, fake victims, fake outrage, fake intimacy. The result is not simply unemployment. It is unemployment plus a counterfeit reality industry.

Deepfake blackmail is particularly vicious in India because reputation can still be held hostage. A fabricated intimate image or video does not need to be technically perfect. It only needs to be plausible enough to terrify a victim before verification can occur. Families do not conduct forensic media analysis at breakfast. Employers do not wait for cryptographic provenance before gossip begins walking around the office with a cup of tea. In a society where shame can move faster than due process, synthetic media becomes a social weapon. The attacker’s power comes not from the pixels alone but from the victim’s expectation that nobody will believe them quickly enough.

Voice cloning adds a further ugliness because voice is one of the last badges of human presence ordinary people trust instinctively. A mother hears a son crying for money. A clerk hears a boss ordering a transfer. A patient hears a doctor’s assistant asking for documents. A bank customer hears a familiar tone wrapped around unfamiliar urgency. The old fraudster had to perform. The new fraudster can outsource performance to a model. This is not merely better mimicry. It is the automation of emotional leverage.

Bank fraud, too, changes character. The crude phishing link is not going away; India never retires a scam, it merely gives it younger accomplices. But AI makes the approach more personalized. Messages can be written in the victim’s language, tuned to their social class, polished for urgency, and adjusted in real time. Synthetic call-center scripts can imitate regional speech. Fake know-your-customer warnings can be made more official-looking. Fraud rings can generate endless variations faster than public awareness campaigns can print posters. Mule accounts, weak verification, insider compromise, and fragmented enforcement become the payment plumbing beneath the magic trick.

Disinformation is the larger civic disaster. India already has a rumor architecture: family WhatsApp groups, local political patronage, television shouting, religious grievance, caste suspicion, linguistic silos, and a population forced to navigate modernity with uneven scientific education. Add AI-generated faces, synthetic speeches, fabricated crowd scenes, cloned leaders, invented doctors, fake data visualizations, and bots that can converse rather than merely bark, and one gets a machine for manufacturing belief at village scale and national speed. The danger is not that everyone will believe everything. The danger is that enough people will believe enough false things at the right moments, while everyone else becomes too exhausted to know what is real.

Here the distinction between data transport and semantic meaning becomes central. A platform can transmit a video flawlessly without carrying any reliable meaning about its origin, consent, truth, or context. Transport says the packet arrived. Meaning asks whether the packet corresponds to reality. Indian digital life is filled with systems that confuse the first achievement for the second. A message delivered is not a fact established. A video rendered is not an event witnessed. A voice heard is not a person authenticated. A screenshot circulated is not evidence. The network moved the artifact; it did not certify the world.

Healthcare architects know this problem in a quieter costume. A Health Level Seven version 2 [HL7 v2, an older but still widely used healthcare messaging standard for moving clinical events between systems] admission message can move from one system to another while losing the workflow context that explains why the event occurred. A Fast Healthcare Interoperability Resources [FHIR, a modern healthcare interoperability standard that represents clinical concepts as modular resources] Observation can represent a lab result cleanly while still failing to capture whether the specimen was delayed, contaminated, repeated, clinically ignored, or entered under operational pressure. An Electronic Health Record [EHR, the primary digital system used to document patient care] can store a diagnosis code that means “suspected,” “ruled out,” “billing necessity,” “historical artifact,” or “true active disease,” depending on who entered it, when, and why. Representation is not reality. It is reality after being squeezed through workflow, incentives, templates, time pressure, and institutional habit.
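To see how clean a representation can look while saying nothing about the workflow around it, consider a minimal sketch of a FHIR Observation, written here in Python with hypothetical identifiers and values rather than drawn from any real record:

```python
import json

# A minimal FHIR R4 Observation for a lab result, built as a plain dictionary.
# The patient reference, timestamp, and result value are hypothetical.
glucose_observation = {
    "resourceType": "Observation",
    "status": "final",  # the result is marked final...
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2339-0",  # LOINC code for glucose [mass/volume] in blood
            "display": "Glucose [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example-123"},
    "effectiveDateTime": "2026-01-15T09:30:00+05:30",
    "valueQuantity": {
        "value": 212,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",
        "code": "mg/dL",
    },
}

print(json.dumps(glucose_observation, indent=2))

# Nothing in this resource says whether the specimen was delayed or contaminated,
# whether the test was a repeat, whether anyone acted on the value, or under what
# pressure it was entered. The representation is tidy; the surrounding reality is gone.
```

The resource validates perfectly, which is precisely the point: correctness of form is not completeness of meaning.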

That lesson now applies to society at large. AI does not attack only information systems. It attacks representational trust. It makes the artifact look like the thing. A fake voice looks like testimony. A fake image looks like evidence. A fake dashboard looks like analysis. A fake expert looks like authority. This is the non-obvious architectural insight: the central AI risk in India is not merely content generation; it is the collapse of ordinary authentication rituals. People have built their daily trust on weak signals—face, voice, letterhead, urgency, social forwarding, official tone, English fluency, family pressure, religious familiarity, bureaucratic vocabulary. AI can synthesize all of them.

Many of these failures will be mislabeled as “data quality” failures, just as healthcare failures often are. When a model trained on hospital data performs badly because the underlying record confuses billing codes with clinical truth, people say the data is dirty. Sometimes it is. But often the deeper problem is representational loss. The system captured something, but not the thing needed for the next use. A fraud database may record a complaint but not the coercive script. A bank record may show an authorized transfer but not the psychological manipulation that produced it. A social platform may store a video hash but not the consent status of the person depicted. A police report may classify a case under a legal category but not preserve the technical provenance needed to detect a larger network. Calling this “bad data” is like blaming a thermometer for not recording the patient’s childhood. The wrong model of the world was encoded.

Healthcare offers the better counterexample because the discipline, when done properly, is forced to care about provenance. Who entered this allergy? Was it patient-reported? Was it observed? Was it imported? Was it mapped from another terminology? Was it active when the medication was ordered? Did the alert fire? Was it overridden? Why? That chain is irritating, expensive, and often incomplete, but it points in the right direction. India’s AI safety architecture needs a similar provenance discipline for synthetic media, financial transactions, identity claims, and official communications. Not because provenance solves truth by itself, but because without it every artifact becomes a masked stranger at the door.
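What that discipline could look like outside the hospital is easiest to show with a small sketch. The structure below is illustrative only, written in Python; the field names are assumptions, not any existing standard, but the questions they force are the same ones the allergy example asks:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative provenance attached to one artifact (entry, image, message)."""
    artifact_id: str            # identifier of the thing being described
    asserted_by: str            # who made the claim: patient, clinician, system import
    method: str                 # "patient-reported", "observed", "imported", "mapped", ...
    recorded_at: datetime       # when it entered this system
    source_system: str          # where it came from
    transformations: list[str] = field(default_factory=list)  # mappings, edits, conversions
    consent_basis: str | None = None  # what consent, if any, covers this artifact

# Hypothetical example: an allergy imported from another hospital's record.
penicillin_allergy = ProvenanceRecord(
    artifact_id="allergy/penicillin/patient-123",
    asserted_by="import:CityHospital-EHR",
    method="imported",
    recorded_at=datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc),
    source_system="CityHospital-EHR",
    transformations=["mapped: local code PCN -> SNOMED CT 91936005"],
    consent_basis="treatment",
)

print(penicillin_allergy)
```

The point is not this particular schema but the habit: no artifact travels without a record of who asserted it, how, and what has happened to it since.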

The practical implication is severe: design must move from content moderation to trust infrastructure. Labels alone will not save anyone. A label can be stripped, ignored, forged, mistranslated, or buried beneath outrage. What is needed is layered verification: cryptographic signing for official communications, strong provenance metadata for media capture and editing, friction on high-risk financial transfers, rapid account freezing across institutions, better mule-account detection, consent-aware image and voice policies, public reporting channels that actually respond, and default skepticism training that reaches people in Bengali, Hindi, Tamil, Telugu, Marathi, Malayalam, Assamese, Odia, Punjabi, and the hundred lived dialects where fraud does its best work. English-only safety advice in India is often a locked fire exit.
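One of those layers, cryptographic signing of official communications, is simple enough to sketch. The example below uses the third-party Python cryptography package (an assumption of convenience; any comparable signing library would do) and a made-up advisory text; the idea is only that the issuing agency signs once, and anyone holding the published public key can check a forwarded copy:

```python
# Minimal signing and verification sketch; assumes `pip install cryptography`.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuing agency generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The official notice is signed at the source. The text is a hypothetical example.
notice = "Advisory 2026-07: No bank will ever ask for your OTP by phone.".encode("utf-8")
signature = private_key.sign(notice)

def is_authentic(message: bytes, sig: bytes) -> bool:
    """Return True only if the message matches the agency's published key."""
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

print(is_authentic(notice, signature))                             # True: untampered copy
print(is_authentic(notice.replace(b"never", b"now"), signature))   # False: altered text
```

Signing does not make the advisory true; it makes forgery of the channel detectable, which is the narrower and more achievable goal.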

For healthcare AI specifically, the direction should be narrower and sterner. Build for supervised clinical augmentation, not theatrical replacement. Prefer boring systems that reduce missed care over dazzling systems that hallucinate empathy. Put AI where it can improve access without pretending to be a doctor: intake support, translation, follow-up reminders, risk stratification, medication reconciliation, referral routing, radiology prioritization, rural telehealth support, and patient education tied to verified clinical sources. Keep a human accountable. Log every recommendation. Measure harm, not only accuracy. Audit across language, caste, gender, geography, disability, and income proxies where legally and ethically appropriate. India does not need an AI godman in a white coat. It needs a careful, monitored, boringly useful clinical workhorse.
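“Log every recommendation” can sound like a slogan, so here is a hedged sketch of what it means in practice. The function and field names below are placeholders, not any real hospital system’s API; the essential features are that the model’s inputs, its version, the accountable human, and that human’s decision all live in one record:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "triage_recommendations.jsonl"  # hypothetical append-only log file

def log_recommendation(patient_id: str, inputs: dict, model_version: str,
                       recommendation: str, reviewed_by: str, accepted: bool,
                       override_reason: str | None = None) -> None:
    """Append one auditable record per AI suggestion; the human decision is part of the record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,
        "inputs": inputs,                    # what the model actually saw
        "model_version": model_version,      # so behavior can be traced after an update
        "recommendation": recommendation,
        "reviewed_by": reviewed_by,          # the accountable human
        "accepted": accepted,
        "override_reason": override_reason,  # why the human disagreed, if they did
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage: a nurse overrides a triage suggestion.
log_recommendation(
    patient_id="patient-123",
    inputs={"chief_complaint": "chest pain", "age": 58, "spo2": 96},
    model_version="triage-model-0.3.1",
    recommendation="routine follow-up",
    reviewed_by="nurse:anita.k",
    accepted=False,
    override_reason="chest pain at rest; escalated to urgent review",
)
```

An audit trail like this is what later makes it possible to measure harm rather than only accuracy, and to ask who overrode what, and why.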

The constraint, of course, is that clean solutions are blocked by dirty realities. India’s healthcare data is fragmented across public hospitals, private chains, laboratories, pharmacies, insurers, paper records, WhatsApp images, personal memory, and the great invisible archive of “doctor ne bola tha” [what the doctor had said]. Consent is poorly understood. Identity is inconsistently implemented. Rural connectivity varies. Clinical coding is uneven. Procurement rewards demos. Governance committees can be decorative. Cybersecurity budgets arrive after the breach. And many institutions still treat data as private property, bureaucratic burden, or political risk rather than public-interest infrastructure under strict safeguards. Anyone proposing a clean national AI health layer without confronting this mess is selling marble flooring for a house without plumbing.

There is also the social constraint: people who want to use AI for repair may be disliked precisely because repair is slow, accountable, and unglamorous. The fraudster offers immediate money. The political manipulator offers immediate influence. The fake-content operator offers immediate power over another human being. The healthcare builder offers years of integration, testing, governance, and blame. In a culture increasingly trained to worship speed, spectacle, and jugaad [the improvised workaround celebrated as cleverness], ethical architecture can look almost antisocial. It says no. It asks for logs. It demands consent. It refuses shortcuts. It spoils the party by noticing the exit doors are painted on.

But this is exactly why the unpopular proposition matters. AI for healthcare access is not sentimental charity. It is national resilience. A country whose citizens cannot trust images, voices, bank calls, exam results, medical advice, or public information will not become advanced merely because it has cheap models and ambitious slogans. It will become a bazaar of hallucinated authority. The antidote is not AI rejection. That would be both impossible and foolish. The antidote is institutional seriousness: authenticated channels, public technical literacy, enforceable penalties, healthcare-grade provenance, clinical governance, and systems that assume attackers are clever, users are frightened, and reality needs protection at the point of representation.

India’s near future may therefore split into two competing architectures. One architecture turns AI into a predatory multiplier: fewer jobs, more scams, cheaper blackmail, faster propaganda, synthetic humans selling synthetic certainty to real people with real losses. The other architecture turns AI into civic instrumentation: better triage, safer medication use, earlier diagnosis, language access, fraud detection, verified public communication, and tools that help honest workers do hard things under pressure. The technology can serve either. The deciding layer is not the model. It is governance, incentive, design, enforcement, and moral seriousness.

The singularity, if one insists on using that overfed word, will not arrive as a point of light. In India it may first arrive as a crowded railway platform where half the announcements are fake, the ticket clerk has been cloned, the doctor is unreachable, the bank wants another one-time password, the politician on the screen never said the thing he is saying, and a young man with three degrees is deciding whether to build a clinical triage tool or a blackmail funnel. That is the fork. Not man versus machine. Man versus the uses to which other men put the machine.

© 2026 Suvro Ghosh